VQ VAE


The vector-quantized variational autoencoder (VQ-VAE) is a generative model that uses vector quantization to learn discrete latent representations: an encoder maps each input to continuous embeddings, each embedding is replaced by its nearest entry in a learned codebook, and a decoder reconstructs the input from the resulting discrete codes.
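The quantization step is the core of the model. Below is a minimal sketch of a vector-quantization layer with the usual straight-through gradient estimator and commitment loss, assuming a PyTorch setup; the codebook size (512), code dimension (64), and commitment weight (0.25) are illustrative choices, not values taken from any of the papers listed below.

```python
# Minimal sketch of a VQ-VAE quantization layer (PyTorch).
# Codebook size, code dimension, and the commitment weight are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=512, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment loss weight

    def forward(self, z_e):
        # z_e: encoder output of shape (batch, ..., code_dim)
        flat = z_e.reshape(-1, z_e.shape[-1])                      # (N, D)
        # Squared Euclidean distance from each encoder vector to every codebook entry
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))              # (N, K)
        indices = dist.argmin(dim=1)                               # nearest code per vector
        z_q = self.codebook(indices).view_as(z_e)                  # quantized latents

        # Codebook loss pulls codes toward encoder outputs;
        # commitment loss keeps encoder outputs near their assigned codes.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())

        # Straight-through estimator: gradients flow to the encoder as if
        # the quantization (argmin) were the identity.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, loss, indices.view(z_e.shape[:-1])


if __name__ == "__main__":
    vq = VectorQuantizer()
    z_e = torch.randn(8, 16, 64)   # e.g. 8 inputs, 16 latent positions, 64-dim embeddings
    z_q, vq_loss, codes = vq(z_e)
    print(z_q.shape, vq_loss.item(), codes.shape)
```

In a full VQ-VAE this layer sits between an encoder and a decoder; the reconstruction loss is added to the quantization loss above, and a separate prior (often autoregressive) is later fit over the discrete code indices for generation.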

SATURN: Autoregressive Image Generation Guided by Scene Graphs

Aug 20, 2025

PCR-CA: Parallel Codebook Representations with Contrastive Alignment for Multiple-Category App Recommendation

Aug 25, 2025

VQ-VAE Based Digital Semantic Communication with Importance-Aware OFDM Transmission

Aug 12, 2025

Rethinking VAE: From Continuous to Discrete Representations Without Probabilistic Assumptions

Jul 23, 2025

Generative molecule evolution using 3D pharmacophore for efficient Structure-Based Drug Design

Jul 27, 2025

MGVQ: Could VQ-VAE Beat VAE? A Generalizable Tokenizer with Multi-group Quantization

Jul 10, 2025

D-CNN and VQ-VAE Autoencoders for Compression and Denoising of Industrial X-ray Computed Tomography Images

Jul 10, 2025

Exploring Classical Piano Performance Generation with Expressive Music Variational AutoEncoder

Jul 02, 2025

Instella-T2I: Pushing the Limits of 1D Discrete Latent Space Image Generation

Jun 26, 2025

DicFace: Dirichlet-Constrained Variational Codebook Learning for Temporally Coherent Video Face Restoration

Jun 16, 2025